library(tidyverse) # for data cleaning and plotting
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
## ✓ ggplot2 3.3.2 ✓ purrr 0.3.4
## ✓ tibble 3.0.4 ✓ dplyr 1.0.2
## ✓ tidyr 1.1.2 ✓ stringr 1.4.0
## ✓ readr 1.4.0 ✓ forcats 0.5.0
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## x dplyr::filter() masks stats::filter()
## x dplyr::lag() masks stats::lag()
library(googlesheets4) # for reading googlesheet data
library(lubridate) # for date manipulation
##
## Attaching package: 'lubridate'
## The following objects are masked from 'package:base':
##
## date, intersect, setdiff, union
library(openintro) # for the abbr2state() function
## Loading required package: airports
## Loading required package: cherryblossom
## Loading required package: usdata
library(palmerpenguins) # for Palmer penguin data
library(maps) # for map data
##
## Attaching package: 'maps'
## The following object is masked from 'package:purrr':
##
## map
library(ggmap) # for mapping points on maps
## Google's Terms of Service: https://cloud.google.com/maps-platform/terms/.
## Please cite ggmap if you use it! See citation("ggmap") for details.
library(gplots) # for col2hex() function
##
## Attaching package: 'gplots'
## The following object is masked from 'package:stats':
##
## lowess
library(RColorBrewer) # for color palettes
library(sf) # for working with spatial data
## Linking to GEOS 3.8.1, GDAL 3.1.1, PROJ 6.3.1
library(leaflet) # for highly customizable mapping
library(carData) # for Minneapolis police stops data
library(ggthemes) # for more themes (including theme_map())
gs4_deauth() # To not have to authorize each time you knit.
theme_set(theme_minimal())
# Starbucks locations
Starbucks <- read_csv("https://www.macalester.edu/~ajohns24/Data/Starbucks.csv")
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## Brand = col_character(),
## `Store Number` = col_character(),
## `Store Name` = col_character(),
## `Ownership Type` = col_character(),
## `Street Address` = col_character(),
## City = col_character(),
## `State/Province` = col_character(),
## Country = col_character(),
## Postcode = col_character(),
## `Phone Number` = col_character(),
## Timezone = col_character(),
## Longitude = col_double(),
## Latitude = col_double()
## )
starbucks_us_by_state <- Starbucks %>%
filter(Country == "US") %>%
count(`State/Province`) %>%
mutate(state_name = str_to_lower(abbr2state(`State/Province`)))
# Lisa's favorite St. Paul places - example for you to create your own data
favorite_stp_by_lisa <- tibble(
place = c("Home", "Macalester College", "Adams Spanish Immersion",
"Spirit Gymnastics", "Bama & Bapa", "Now Bikes",
"Dance Spectrum", "Pizza Luce", "Brunson's"),
long = c(-93.1405743, -93.1712321, -93.1451796,
-93.1650563, -93.1542883, -93.1696608,
-93.1393172, -93.1524256, -93.0753863),
lat = c(44.950576, 44.9378965, 44.9237914,
44.9654609, 44.9295072, 44.9436813,
44.9399922, 44.9468848, 44.9700727)
)
#COVID-19 data from the New York Times
covid19 <- read_csv("https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv")
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## date = col_date(format = ""),
## state = col_character(),
## fips = col_character(),
## cases = col_double(),
## deaths = col_double()
## )
If you were not able to get set up on GitHub last week, go here and get set up first. Then, do the following (if you get stuck on a step, don’t worry, I will help! You can always get started on the homework and we can figure out the GitHub piece later):
Add keep_md: TRUE in the YAML heading. The .md file is a markdown (NOT R Markdown) file that is an interim step in creating the html file. Markdown files display fairly nicely on GitHub, so we want to keep the .md file and look at it there. Click the boxes next to these two files, commit the changes (remember to include a commit message), and push them (green up arrow). Put your name at the top of the document.
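For reference, a YAML header with keep_md enabled might look like this (a minimal sketch; the title and other fields in your own file will differ):
---
title: "Weekly Exercises"
output:
  html_document:
    keep_md: true
---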
For ALL graphs, you should include appropriate labels.
Feel free to change the default theme, which I currently have set to theme_minimal().
Use good coding practice. Read the short sections on good code with pipes and ggplot2. This is part of your grade!
When you are finished with ALL the exercises, uncomment the options at the top so your document looks nicer. Don’t do it before then, or else you might miss some important warnings and messages.
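If you are unsure what those options look like, they are typically knitr chunk options along these lines (a sketch; the exact options in your template may differ):
knitr::opts_chunk$set(
  warning = FALSE,  # hide warnings in the knitted output
  message = FALSE   # hide package startup and other messages
)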
These exercises will reiterate what you learned in the “Mapping data with R” tutorial. If you haven’t gone through the tutorial yet, you should do that first.
### Starbucks locations (ggmap)
Add the Starbucks locations to a world map. Add an aesthetic to the world map that sets the color of the points according to the ownership type. What, if anything, can you deduce from this visualization?
world <- get_stamenmap(
bbox = c(left = -180, bottom = -57, right = 179, top = 82.1),
maptype = "terrain",
zoom = 2)
ggmap(world) +
geom_point(data = Starbucks,
aes(x = Longitude, y = Latitude, color = `Ownership Type`),
alpha = .3,
size = .1) +
theme_map() +
theme(legend.background = element_blank()) +
labs(title = "Starbucks Locations: Ownership Type")
## Warning: Removed 1 rows containing missing values (geom_point).
It appears that North America is dominated by "Company Owned" and "Licensed" stores. North America also has a lot of Starbucks clustered on both coasts. South America has very few Starbucks; they are also "Company Owned" and "Licensed". Europe is decently populated with "Company Owned", "Licensed", and "Joint Venture" stores. Asia's stores are clustered on its east coast, with a lot of "Company Owned", "Joint Venture", and "Licensed" locations. Africa and Australia have very few stores.
Construct a new map of Starbucks locations in the Twin Cities metro area. What does the zoom number do (i.e. what does it control)?
City <- get_stamenmap(
bbox = c(left = -93.60, bottom = 44.71, right = -92.92, top = 45.34),
maptype = "terrain",
zoom = 10)
ggmap(City) +
geom_point(data = Starbucks,
aes(x = Longitude, y = Latitude),
size = .6) +
theme_map() +
labs(title = "Twin Cities: Starbucks Locations")
## Warning: Removed 25459 rows containing missing values (geom_point).
The zoom number controls how far the map is zoomed in: the higher the number, the more the map focuses in (with finer tile detail); the lower the number, the more zoomed out the map is.
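For example, re-pulling the same bounding box with a larger zoom downloads more, finer-grained tiles (a sketch; zoom = 12 is an arbitrary illustration):
city_zoomed <- get_stamenmap(
  bbox = c(left = -93.60, bottom = 44.71, right = -92.92, top = 45.34),
  maptype = "terrain",
  zoom = 12)  # finer detail than zoom = 10, at the cost of more tile downloads
ggmap(city_zoomed)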
In the Twin Cities plot, play with the map type (look up get_stamenmap() in Help and see the options for the maptype argument). Include a map with one of the other map types.
City <- get_stamenmap(
bbox = c(left = -93.60, bottom = 44.71, right = -92.92, top = 45.34),
maptype = "watercolor",
zoom = 10)
ggmap(City) +
geom_point(data = Starbucks,
aes(x = Longitude, y = Latitude),
size = .6) +
theme_map() +
labs(title = "Twin Cities: Starbucks Locations")
## Warning: Removed 25459 rows containing missing values (geom_point).
Add a point to the map that indicates Macalester College and label it appropriately using the annotate() function (see the ggplot2 cheatsheet).
City <- get_stamenmap(
bbox = c(left = -93.60, bottom = 44.71, right = -92.92, top = 45.34),
maptype = "terrain",
zoom = 10)
ggmap(City) +
geom_point(data = Starbucks,
aes(x = Longitude, y = Latitude),
size = .6) +
annotate(geom = "point",
x = -93.17,
y = 44.94,
color = "orange",
size = .5) +
annotate(geom = "text",
x = -93.17,
y = 44.93,
label = "Macalester College",
size = 2,
color = "Blue") +
theme_map() +
labs(title = "Twin Cities: Starbucks Locations")
## Warning: Removed 25459 rows containing missing values (geom_point).
### Choropleth maps with Starbucks data (geom_map())
The example I showed in the tutorial did not account for population of each state in the map. In the code below, a new variable is created, starbucks_per_10000, that gives the number of Starbucks per 10,000 people. It is in the starbucks_with_2018_pop_est dataset.
census_pop_est_2018 <- read_csv("https://www.dropbox.com/s/6txwv3b4ng7pepe/us_census_2018_state_pop_est.csv?dl=1") %>%
separate(state, into = c("dot","state"), extra = "merge") %>%
select(-dot) %>%
mutate(state = str_to_lower(state))
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## state = col_character(),
## est_pop_2018 = col_double()
## )
starbucks_with_2018_pop_est <-
starbucks_us_by_state %>%
left_join(census_pop_est_2018,
by = c("state_name" = "state")) %>%
mutate(starbucks_per_10000 = (n/est_pop_2018)*10000)
dplyr review: Look through the code above and describe what each line of code does.
The first line reads in the census data from the URL with read_csv() and assigns the result to census_pop_est_2018 so it can be used later. The separate() line splits the state column into two columns, dot and state, merging any extra pieces into state. The select() line drops the dot column, and the mutate() line converts the state names to lowercase. In the second pipeline, left_join() matches starbucks_us_by_state with the census estimates by state name, and mutate() creates starbucks_per_10000, the number of Starbucks per 10,000 people.
us_starbucks_locations <- Starbucks %>%
filter(Country == "US", !(`State/Province` %in% c("HI", "AK"))) %>%
select("State/Province", "Latitude", "Longitude")
states_map <- map_data("state")
starbucks_with_2018_pop_est %>%
ggplot() +
geom_map(map = states_map,
aes(map_id = state_name,
fill = starbucks_per_10000)) +
geom_point(data = us_starbucks_locations,
aes(x = Longitude,
y = Latitude),
color = "blue",
size = .1) +
expand_limits(x = states_map$long, y = states_map$lat) +
theme_map() +
labs(title = "Starbucks Location Density by State in the US",
fill = "Starbucks per 10000 people",
caption = "Nick Camp")
I observe that although the East Coast, specifically New England, has a large cluster of Starbucks locations, it is also very populous, so those states are not the lightest shade (they do not have a high ratio of Starbucks per person). The West Coast also has clusters of Starbucks, and Washington in particular has a higher ratio of Starbucks locations per 10,000 people. One outlier is Colorado: the Midwest is generally a darker shade, but Colorado has a cluster of dots and is very light. Overall, it may be that the West Coast does not run on Dunkin' the way the East Coast, specifically New England, does.
### A few of your favorite things (leaflet)
my_favorite_places <- tibble(
place = c("Braydens House", "My House", "Yankee Stadium",
"Taft School", "New Jersey Devils Stadium", "Misquamicut Beach",
"Watch Hill, Rhode Island", "Block Island", "Barkhamsted Reservoir",
"Macalester College"),
long = c(-72.053240, -72.580016, -73.926262,
-73.124103, -74.171017, -71.802477,
-71.852768, -71.573941, -72.953079,
-93.1692095),
lat = c(41.438906, 41.960977, 40.829601,
41.603806, 40.733618, 41.322996,
41.308891, 41.183244, 41.927751,
44.9374234),
top3 = c(TRUE, FALSE, FALSE,
FALSE, FALSE, TRUE,
TRUE, FALSE, FALSE,
FALSE))
Create a data set using the tibble() function that has 10-15 rows of your favorite places. The columns will be the name of the location, the latitude, the longitude, and a column that indicates if it is in your top 3 favorite locations or not. For an example of how to use tibble(), look at the favorite_stp_by_lisa data I created in the R code chunk at the beginning.
pal_line <- colorFactor("viridis", domain = my_favorite_places$top3)
leaflet(data = my_favorite_places) %>%
addProviderTiles(providers$OpenStreetMap.DE) %>%
addCircles(lng = ~long,
lat = ~lat,
label = ~place,
weight = 10,
opacity = 1,
color = ~pal_line(top3)) %>%
addLegend(pal = pal_line,
values = ~top3,
"bottomright",
title = "Favorite 3 Places")
Create a leaflet map that uses circles to indicate your favorite places. Label them with the name of the place. Choose the base map you like best. Color your 3 favorite places differently than the ones that are not in your top 3 (HINT: colorFactor()). Add a legend that explains what the colors mean.
pal_line <- colorFactor("viridis", domain = my_favorite_places$top3)
leaflet(data = my_favorite_places) %>%
addProviderTiles(providers$OpenStreetMap.DE) %>%
addCircles(lng = ~long,
lat = ~lat,
label = ~place,
weight = 10,
opacity = 1,
color = ~pal_line(top3)) %>%
addPolylines(lng = ~long,
lat = ~lat,
weight = 2,
opacity = 1) %>%
addLegend(pal = pal_line,
values = ~top3,
"bottomright",
title = "Favorite 3 Places")
Connect all your locations together with a line in a meaningful way (you may need to order them differently in the original data).
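One way to make the line meaningful is to sort the rows before drawing it, e.g. west to east by longitude (a sketch; sorting by long is just one illustrative ordering — any route-like order works):
my_favorite_places %>%
  arrange(long) %>%  # order the stops west to east so the path doesn't criss-cross
  leaflet() %>%
  addProviderTiles(providers$OpenStreetMap.DE) %>%
  addPolylines(lng = ~long, lat = ~lat, weight = 2, opacity = 1)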
If there are other variables you want to add that could enhance your plot, do that now.
This section will revisit some datasets we have used previously and bring in a mapping component.
The data come from Washington, DC and cover the last quarter of 2014.
Two data tables are available:
Trips contains records of individual rentals.
Stations gives the locations of the bike rental stations.
Here is the code to read in the data. We do this a little differently than usual, which is why it is included here rather than at the top of this file. To avoid repeatedly re-reading the files, start the data import chunk with {r cache = TRUE} rather than the usual {r}. This code reads in the large dataset right away.
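Concretely, the data-import chunk's header becomes the following (read_trips is a hypothetical chunk name):
{r read_trips, cache = TRUE}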
data_site <-
"https://www.macalester.edu/~dshuman1/data/112/2014-Q4-Trips-History-Data.rds"
Trips <- readRDS(gzcon(url(data_site)))
Stations <- read_csv("http://www.macalester.edu/~dshuman1/data/112/DC-Stations.csv")
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## name = col_character(),
## lat = col_double(),
## long = col_double(),
## nbBikes = col_double(),
## nbEmptyDocks = col_double()
## )
Use the latitude and longitude variables in Stations to make a visualization of the total number of departures from each station in the Trips data. Use either color or size to show the variation in the number of departures. This time, plot the points on top of a map. Use any of the mapping tools you'd like.
washington_dc <- get_stamenmap(bbox = c(left = -77.1701, bottom = 38.7813, right = -76.8485, top = 39.0131),
maptype = "terrain",
zoom = 11)
departures_from_stations <- Trips %>%
left_join(Stations,
by = c("sstation" = "name")) %>%
group_by(lat, long) %>%
summarize(number_of_departures = n(), prop_casual = mean(client == "Casual"))
## `summarise()` regrouping output by 'lat' (override with `.groups` argument)
ggmap(washington_dc) +
geom_point(data = departures_from_stations,
aes(x = long, y = lat,
color = number_of_departures)) +
labs(x = "Longitude", y = "Latitude") +
ggtitle("Washington DC: Departures at Bike Rental Stations")
## Warning: Removed 22 rows containing missing values (geom_point).
washington_dc <- get_stamenmap(bbox = c(left = -77.1701, bottom = 38.7813, right = -76.8485, top = 39.0131),
maptype = "terrain",
zoom = 11)
departures_from_stations_client <- Trips %>%
inner_join(Stations,
by = c("sstation" = "name")) %>%
select(sstation, lat, long, client) %>%
group_by(sstation, lat, long) %>%
summarize(prop_cas = mean(client == "Casual"))
## `summarise()` regrouping output by 'sstation', 'lat' (override with `.groups` argument)
ggmap(washington_dc) +
geom_point(data = departures_from_stations_client,
aes(x = long, y = lat,
color = prop_cas)) +
labs(x = "Longitude", y = "Latitude") +
ggtitle("Washington DC: Departures at Bike Rental Stations for Casual Riders")
## Warning: Removed 1246 rows containing missing values (geom_point).
On the map there is a cluster of lighter dots along the Lower Potomac River and in the center of the city. This would make sense if people strolling by the river decided they wanted to rent a bike and ride along it. It also makes sense because those dots are in the middle of the city, where population density is highest; with more people around, the number of casual users would increase.
The following exercises will use the COVID-19 data from the NYT.
states_map <- map_data("state")
covid19 %>%
group_by(state) %>%
summarize(total = max(cases)) %>%
mutate(state = str_to_lower(state)) %>%
ggplot() +
geom_map(map = states_map,
aes(map_id = state,
fill = total)) +
expand_limits(x = states_map$long, y = states_map$lat) +
labs(title = "Cumulative Recent Coronavirus Cases",
fill = "Cumulative Cases") +
theme_map()
## `summarise()` ungrouping output (override with `.groups` argument)
I see that Florida, Texas, and California have the highest number of recent cumulative cases. The Midwest, New England, and Northwest appear to have fewer cases than other regions. One problem with the map is the phrase "recent cumulative cases": cumulative counts are totals over the entire pandemic, so calling them "recent" is confusing and contradictory. The map also does not account for state populations. For example, Wyoming is less populated than California, so California having many more cases does not mean it is handling the virus worse than Wyoming.
census_pop_est_2018 <- read_csv("https://www.dropbox.com/s/6txwv3b4ng7pepe/us_census_2018_state_pop_est.csv?dl=1") %>%
separate(state, into = c("dot","state"), extra = "merge") %>%
select(-dot) %>%
mutate(state = str_to_lower(state))
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## state = col_character(),
## est_pop_2018 = col_double()
## )
covid19 %>%
group_by(state) %>%
summarize(total = max(cases)) %>%
mutate(state = str_to_lower(state)) %>%
left_join(census_pop_est_2018,
by = c("state" = "state")) %>%
mutate(cases_per_10000 = (total/est_pop_2018)*10000) %>%
ggplot() +
geom_map(map = states_map,
aes(map_id = state,
fill = cases_per_10000)) +
expand_limits(x = states_map$long, y = states_map$lat) +
labs(title = "Most Recent Cumulative COVID-19 Cases per 10,000 People",
fill = "Cases per 10,000") +
theme_map()
## `summarise()` ungrouping output (override with `.groups` argument)
These exercises use the datasets MplsStops and MplsDemo from the carData library. Search for them in Help to find out more information.
Use the MplsStops dataset to find out how many stops there were for each neighborhood and the proportion of stops that were for a suspicious vehicle or person. Sort the results from most to least number of stops. Save this as a dataset called mpls_suspicious and display the table.
mpls_suspicious <- MplsStops %>%
group_by(neighborhood) %>%
summarize(number_stops = n(),
n_suspicious = sum(problem == "suspicious"),
proportion_suspicious = n_suspicious/number_stops) %>%
arrange(desc(number_stops))
## `summarise()` ungrouping output (override with `.groups` argument)
mpls_suspicious
leaflet map and the MplsStops dataset to display each of the stops on a map as a small point. Color the points differently depending on whether they were for suspicious vehicle/person or a traffic stop (the problem variable). HINTS: use addCircleMarkers, set stroke = FAlSE, use colorFactor() to create a palette.pal_problem <- colorFactor("viridus",
domain = MplsStops$problem)
leaflet(MplsStops) %>%
addProviderTiles(providers$Stamen.Toner) %>%
addCircles(lng = ~long,
lat = ~lat,
color = ~pal_problem(problem),
weight = .6,
opacity = .3) %>%
addLegend(pal = pal_problem,
values = ~problem)
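Following the hint more literally, the same map can be drawn with addCircleMarkers() and stroke = FALSE (a sketch; radius = 2 is an arbitrary choice for "small"):
leaflet(MplsStops) %>%
  addProviderTiles(providers$Stamen.Toner) %>%
  addCircleMarkers(lng = ~long,
                   lat = ~lat,
                   color = ~pal_problem(problem),
                   radius = 2,      # small points
                   stroke = FALSE,  # no border ring around each marker
                   fillOpacity = .3) %>%
  addLegend(pal = pal_problem,
            values = ~problem)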
Although it looks like it only links to the .shp file, you need the entire folder of files to create the mpls_nbhd dataset. These data contain information about the geometries of the Minneapolis neighborhoods. Using the mpls_nbhd dataset as the base file, join the mpls_suspicious and MplsDemo datasets to it by neighborhood (careful, the neighborhood variable is named different things in the different files). Call this new dataset mpls_all.
mpls_nbhd <- st_read("Minneapolis_Neighborhoods/Minneapolis_Neighborhoods.shp", quiet = TRUE)
mpls_all <- mpls_nbhd %>%
left_join(mpls_suspicious,
by = c("BDNAME" = "neighborhood")) %>%
left_join(MplsDemo,
by = c("BDNAME" = "neighborhood"))
Use leaflet to create a map from the mpls_all data that colors the neighborhoods by prop_suspicious. Display the neighborhood name as you scroll over it. Describe what you observe in the map.
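A sketch of one way to build this map, assuming the joined proportion column is named proportion_suspicious (as created above) and the neighborhood name column is BDNAME:
pal_susp <- colorNumeric("viridis", domain = mpls_all$proportion_suspicious)
leaflet(mpls_all) %>%
  addProviderTiles(providers$CartoDB.Positron) %>%
  addPolygons(fillColor = ~pal_susp(proportion_suspicious),
              fillOpacity = .7,
              weight = 1,            # thin neighborhood borders
              color = "white",
              label = ~BDNAME) %>%   # neighborhood name appears on hover
  addLegend(pal = pal_susp,
            values = ~proportion_suspicious,
            title = "Proportion suspicious")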
Use leaflet to create a map of your own choosing. Come up with a question you want to try to answer and use the map to help answer that question. Describe what your map shows.
DID YOU REMEMBER TO UNCOMMENT THE OPTIONS AT THE TOP?